State-of-the-art 3D-aware generative models rely on coordinate-based MLPs to parameterize 3D radiance fields. While demonstrating impressive results, querying an MLP for every sample along each ray leads to slow rendering. Existing approaches therefore typically render low-resolution feature maps and process them with an upsampling network to obtain the final image. Albeit efficient, neural rendering often entangles viewpoint and content, such that changing the camera pose results in unwanted changes of geometry or appearance. Motivated by recent results in voxel-based novel view synthesis, we investigate the utility of sparse voxel grid representations for fast and 3D-consistent generative modeling in this paper. Our results show that monolithic MLPs can indeed be replaced by 3D convolutions when sparse voxel grids are combined with progressive growing, free-space pruning, and appropriate regularization. To obtain a compact representation of the scene and allow for scaling to higher voxel resolutions, our model disentangles the foreground object (modeled in 3D) from the background (modeled in 2D). In contrast to existing approaches, our method requires only a single forward pass to generate a full 3D scene. It hence allows for efficient rendering from arbitrary viewpoints while yielding 3D-consistent results with high visual fidelity.
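As a rough illustration of the explicit pipeline described above, the sketch below shows a 3D-convolutional generator emitting a dense, low-resolution voxel grid of colors and densities in one forward pass, plus alpha-compositing of samples along a ray. This is a toy stand-in under our own assumptions, not the paper's actual sparse architecture; all names and layer sizes are hypothetical.

    # Toy sketch: a 3D-convolutional generator produces an explicit voxel grid
    # of RGB + density in a single forward pass; rays are then rendered by
    # alpha-compositing the samples gathered from that grid.
    import torch
    import torch.nn as nn

    class VoxelGenerator(nn.Module):
        def __init__(self, z_dim=64):
            super().__init__()
            self.fc = nn.Linear(z_dim, 128 * 4 * 4 * 4)
            self.net = nn.Sequential(
                nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose3d(32, 4, 4, stride=2, padding=1),  # RGB + density
            )

        def forward(self, z):
            x = self.fc(z).view(-1, 128, 4, 4, 4)
            return self.net(x)  # (B, 4, 32, 32, 32)

    def composite_ray(rgb_sigma, deltas):
        """rgb_sigma: (S, 4) samples along one ray, deltas: (S,) step sizes."""
        rgb, sigma = rgb_sigma[:, :3], torch.relu(rgb_sigma[:, 3])
        alpha = 1.0 - torch.exp(-sigma * deltas)
        trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:1]), 1.0 - alpha + 1e-10]), 0)[:-1]
        return (trans[:, None] * alpha[:, None] * torch.sigmoid(rgb)).sum(0)  # (3,) pixel color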
Neural Radiance Fields (NeRF) have emerged as a powerful representation for the task of novel view synthesis due to their simplicity and state-of-the-art performance. Although NeRF can produce photorealistic renderings of unseen viewpoints when many input views are available, its performance drops significantly when this number is reduced. We observe that the majority of artifacts in sparse input scenarios are caused by errors in the estimated scene geometry and by divergent behavior at the beginning of training. We address this by regularizing the geometry and appearance of patches rendered from unobserved viewpoints, and by annealing the ray sampling space during training. We additionally use a normalizing flow model to regularize the colors of unobserved viewpoints. Our model outperforms not only other methods that optimize over a single scene, but in many cases also conditional models that have been extensively pre-trained on large multi-view datasets.
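The two regularizers mentioned above can be sketched in simplified form as follows: a smoothness term on the depths of a patch rendered from a random unobserved viewpoint, and a linear annealing of the ray sampling interval. Both functions are our own illustrative approximations, not the paper's exact formulation.

    # Illustrative sketch: depth-smoothness regularization of rendered patches
    # and annealing of the near/far sampling bounds over training.
    import torch

    def depth_smoothness_loss(depth_patch):
        """depth_patch: (H, W) expected ray termination depths of a rendered patch."""
        dx = depth_patch[:, :-1] - depth_patch[:, 1:]
        dy = depth_patch[:-1, :] - depth_patch[1:, :]
        return (dx ** 2).mean() + (dy ** 2).mean()

    def annealed_bounds(near, far, step, max_step, start_frac=0.5):
        """Shrink the sampled ray interval around its midpoint early in training
        and expand it linearly to the full [near, far] range."""
        frac = min(1.0, start_frac + (1.0 - start_frac) * step / max_step)
        mid = 0.5 * (near + far)
        return mid + (near - mid) * frac, mid + (far - mid) * frac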
In recent years, neural implicit representations have gained popularity in 3D reconstruction due to their expressiveness and flexibility. However, the implicit nature of neural implicit representations results in slow inference times and requires careful initialization. In this paper, we revisit the classic yet ubiquitous point cloud representation and introduce a differentiable point-to-mesh layer using a differentiable formulation of Poisson Surface Reconstruction (PSR) that allows for a fast, GPU-accelerated solution of the indicator function given an oriented point cloud. The differentiable PSR layer allows us to efficiently and differentiably bridge the explicit 3D point representation with the 3D mesh via the implicit indicator field, enabling end-to-end optimization of surface reconstruction metrics such as the Chamfer distance. This duality between points and meshes hence allows us to represent shapes as oriented point clouds, which are explicit, lightweight, and expressive. Compared to neural implicit representations, our Shape-As-Points (SAP) model is more interpretable, lightweight, and accelerates inference time by one order of magnitude. Compared to other explicit representations such as points, patches, and meshes, SAP produces topology-agnostic, watertight manifold surfaces. We demonstrate the effectiveness of SAP on the tasks of surface reconstruction from unoriented point clouds and learning-based reconstruction.
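For reference, a minimal version of the Chamfer distance that such an end-to-end pipeline can optimize once points are differentiably mapped to a mesh or indicator field; the differentiable PSR layer itself is omitted here and names are illustrative.

    # Symmetric squared Chamfer distance between a predicted and a target point set.
    import torch

    def chamfer_distance(pred_pts, gt_pts):
        """pred_pts: (N, 3), gt_pts: (M, 3)."""
        d = torch.cdist(pred_pts, gt_pts) ** 2          # (N, M) pairwise squared distances
        return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()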
Deep generative models allow for photorealistic image synthesis at high resolutions. But for many applications, this is not enough: content creation also needs to be controllable. While several recent works investigate how to disentangle underlying factors of variation in the data, most of them operate in 2D and hence ignore that our world is three-dimensional. Further, only few works consider the compositional nature of scenes. Our key hypothesis is that incorporating a compositional 3D scene representation into the generative model leads to more controllable image synthesis. Representing scenes as compositional generative neural feature fields allows us to disentangle one or multiple objects from the background as well as individual objects' shapes and appearances while learning from unstructured and unposed image collections without any additional supervision. Combining this scene representation with a neural rendering pipeline yields a fast and realistic image synthesis model. As evidenced by our experiments, our model is able to disentangle individual objects and allows for translating and rotating them in the scene as well as changing the camera pose.
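The density-weighted composition of per-object fields that underlies this kind of compositional scene representation can be sketched as follows; shapes and function names are illustrative assumptions rather than the exact implementation.

    # Sketch: combine per-object densities and feature vectors at shared query
    # points into a single scene density and a density-weighted scene feature.
    import torch

    def compose_fields(sigmas, feats, eps=1e-8):
        """sigmas: list of (N,) per-object densities at N query points,
        feats:  list of (N, C) per-object feature vectors at the same points."""
        sigma = torch.stack(sigmas, dim=0).sum(dim=0)                            # (N,)
        feat = torch.stack([s[:, None] * f for s, f in zip(sigmas, feats)], 0).sum(0)
        return sigma, feat / (sigma[:, None] + eps)                              # (N,), (N, C)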
While 2D generative adversarial networks have enabled high-resolution image synthesis, they largely lack an understanding of the 3D world and the image formation process. Thus, they do not provide precise control over camera viewpoint or object pose. To address this problem, several recent approaches leverage intermediate voxel-based representations in combination with differentiable rendering. However, existing methods either produce low image resolution or fall short in disentangling camera and scene properties, e.g., the object identity may vary with the viewpoint. In this paper, we propose a generative model for radiance fields which have recently proven successful for novel view synthesis of a single scene. In contrast to voxel-based representations, radiance fields are not confined to a coarse discretization of the 3D space, yet allow for disentangling camera and scene properties while degrading gracefully in the presence of reconstruction ambiguity. By introducing a multi-scale patch-based discriminator, we demonstrate synthesis of high-resolution images while training our model from unposed 2D images alone. We systematically analyze our approach on several challenging synthetic and real-world datasets. Our experiments reveal that radiance fields are a powerful representation for generative image synthesis, leading to 3D consistent models that render with high fidelity.
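A hypothetical sketch of the multi-scale patch idea: instead of rendering full images, a K x K grid of pixel coordinates is drawn at a random scale and offset, and both generator and discriminator operate on these patches. Parameters and names are assumptions for illustration, not the paper's code.

    # Sample a K x K patch of pixel coordinates at a random scale and offset.
    import torch

    def sample_patch_coords(img_size, K=16):
        scale = K / img_size + torch.rand(1).item() * (1.0 - K / img_size)
        extent = scale * img_size
        u0 = torch.rand(1).item() * (img_size - extent)
        v0 = torch.rand(1).item() * (img_size - extent)
        us = torch.linspace(u0, u0 + extent, K)
        vs = torch.linspace(v0, v0 + extent, K)
        return torch.stack(torch.meshgrid(us, vs, indexing="ij"), dim=-1)  # (K, K, 2)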
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction. While demonstrating promising results, most implicit approaches are limited to comparably simple geometry of single objects and do not scale to more complicated or large-scale scenes. The key limiting factor of implicit methods is their simple fully-connected network architecture which does not allow for integrating local information in the observations or incorporating inductive biases such as translational equivariance. In this paper, we propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes. By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space. We investigate the effectiveness of the proposed representation by reconstructing complex geometry from noisy point clouds and low-resolution voxel representations. We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
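A minimal sketch of the encoder/decoder split described above: features from a 3D feature volume (produced by a convolutional encoder, omitted here) are trilinearly interpolated at query points and fed, together with the point coordinates, to a small occupancy decoder. Layer sizes and names are illustrative, not the paper's.

    # Query a 3D feature volume at continuous points and decode occupancy.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OccDecoder(nn.Module):
        def __init__(self, feat_dim=32, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, pts, feat_volume):
            """pts: (B, N, 3) in [-1, 1]; feat_volume: (B, C, D, H, W)."""
            grid = pts.view(pts.shape[0], 1, 1, -1, 3)                      # (B, 1, 1, N, 3)
            feats = F.grid_sample(feat_volume, grid, align_corners=True)    # (B, C, 1, 1, N)
            feats = feats.view(pts.shape[0], feat_volume.shape[1], -1).permute(0, 2, 1)
            return torch.sigmoid(self.net(torch.cat([pts, feats], dim=-1)))  # (B, N, 1)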
Learning-based 3D reconstruction methods have shown impressive results. However, most methods require 3D supervision which is often hard to obtain for real-world datasets. Recently, several works have proposed differentiable rendering techniques to train reconstruction models from RGB images. Unfortunately, these approaches are currently restricted to voxel- and mesh-based representations, suffering from discretization or low resolution. In this work, we propose a differentiable rendering formulation for implicit shape and texture representations. Implicit representations have recently gained popularity as they represent shape and texture continuously. Our key insight is that depth gradients can be derived analytically using the concept of implicit differentiation. This allows us to learn implicit shape and texture representations directly from RGB images. We experimentally show that our single-view reconstructions rival those learned with full 3D supervision. Moreover, we find that our method can be used for multi-view 3D reconstruction, directly resulting in watertight meshes.
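The implicit-differentiation argument referred to above can be written out as a short derivation (our notation; a sketch of the idea rather than the paper's exact formulation). Let the ray be r(d) = r_0 + d w and let \hat{p} = r(\hat{d}) be the surface point at which the shape network crosses its level set, i.e. f_\theta(\hat{p}) = \tau. Differentiating this level-set condition with respect to the network parameters \theta gives

    \frac{\partial f_\theta(\hat{p})}{\partial \theta}
      + \nabla_p f_\theta(\hat{p}) \cdot w \,\frac{\partial \hat{d}}{\partial \theta} = 0
    \quad\Longrightarrow\quad
    \frac{\partial \hat{d}}{\partial \theta}
      = -\bigl(\nabla_p f_\theta(\hat{p}) \cdot w\bigr)^{-1}\,
        \frac{\partial f_\theta(\hat{p})}{\partial \theta}.

The gradient of the surface depth with respect to the shape parameters is thus available in closed form, which is what allows shape and texture to be learned directly from RGB images as described above.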
In recent years, substantial progress has been achieved in learning-based reconstruction of 3D objects. At the same time, generative models were proposed that can generate highly realistic images. However, despite this success in these closely related tasks, texture reconstruction of 3D objects has received little attention from the research community and state-of-the-art methods are either limited to comparably low resolution or constrained experimental setups. A major reason for these limitations is that common representations of texture are inefficient or hard to interface for modern deep learning techniques. In this paper, we propose Texture Fields, a novel texture representation which is based on regressing a continuous 3D function parameterized with a neural network. Our approach circumvents limiting factors like shape discretization and parameterization, as the proposed texture representation is independent of the shape representation of the 3D object. We show that Texture Fields are able to represent high frequency texture and naturally blend with modern deep learning techniques. Experimentally, we find that Texture Fields compare favorably to state-of-the-art methods for conditional texture reconstruction of 3D objects and enable learning of probabilistic generative models for texturing unseen 3D models. We believe that Texture Fields will become an important building block for the next generation of generative 3D models.
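As a toy illustration of a continuous texture function, the following stand-in network regresses an RGB color for any 3D surface point conditioned on a latent code; the architecture and the form of conditioning are assumptions for illustration, not the paper's model.

    # Toy texture field: map a continuous 3D point plus a latent code to RGB.
    import torch
    import torch.nn as nn

    class TextureField(nn.Module):
        def __init__(self, z_dim=128, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3 + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 3),
            )

        def forward(self, pts, z):
            """pts: (N, 3) surface points, z: (z_dim,) shape/image embedding."""
            z = z[None].expand(pts.shape[0], -1)
            return torch.sigmoid(self.net(torch.cat([pts, z], dim=-1)))  # RGB in [0, 1]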
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity. However, unlike for images, in 3D there is no canonical representation which is both computationally and memory efficient yet allows for representing high-resolution geometry of arbitrary topology. Many of the state-of-the-art learning-based 3D reconstruction approaches can hence only represent very coarse 3D geometry or are limited to a restricted domain. In this paper, we propose Occupancy Networks, a new representation for learning-based 3D reconstruction methods. Occupancy networks implicitly represent the 3D surface as the continuous decision boundary of a deep neural network classifier. In contrast to existing approaches, our representation encodes a description of the 3D output at infinite resolution without excessive memory footprint. We validate that our representation can efficiently encode 3D structure and can be inferred from various kinds of input. Our experiments demonstrate competitive results, both qualitatively and quantitatively, for the challenging tasks of 3D reconstruction from single images, noisy point clouds and coarse discrete voxel grids. We believe that occupancy networks will become a useful tool in a wide variety of learning-based 3D tasks.
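A minimal sketch of the core idea under our own assumptions: a classifier assigns each 3D point a logit of occupancy, is trained with a cross-entropy loss on points sampled in space, and the surface is read off as the decision boundary {p : f_\theta(p) = 0.5}. Layer sizes, the learning rate, and the missing input conditioning are toy choices.

    # Toy occupancy network trained with binary cross-entropy on sampled points.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    occ_net = nn.Sequential(
        nn.Linear(3, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    opt = torch.optim.Adam(occ_net.parameters(), lr=1e-4)

    def train_step(points, occ_labels):
        """points: (N, 3) sampled 3D locations; occ_labels: (N, 1) float labels in {0, 1}."""
        logits = occ_net(points)
        loss = F.binary_cross_entropy_with_logits(logits, occ_labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()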
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
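A hedged sketch of the simpler of the two attacks described above: stamp a small trigger patch onto a fraction of the original training images, relabel them to an attacker-chosen class, and only then run distillation. Trigger shape, poison rate, and function names are illustrative assumptions, not the paper's exact procedure.

    # Poison a fraction of the raw dataset with a corner trigger before distillation.
    import torch

    def poison_dataset(images, labels, target_class, poison_frac=0.01, patch=3):
        """images: (N, C, H, W) in [0, 1]; labels: (N,) integer class labels."""
        n_poison = max(1, int(poison_frac * images.shape[0]))
        idx = torch.randperm(images.shape[0])[:n_poison]
        images = images.clone()
        labels = labels.clone()
        images[idx, :, -patch:, -patch:] = 1.0   # white square in the corner as trigger
        labels[idx] = target_class
        return images, labels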